Towards Autonomous, Perceptive, and Intelligent Virtual Actors
Internal identifier: 000E23 (Main/Exploration); previous: 000E22; next: 000E24
Authors: Daniel Thalmann [Switzerland]; Hansrudi Noser [Switzerland]
Source:
- Lecture Notes in Computer Science [0302-9743]; 1999.
English descriptors
- Teeft:
- Acoustic environment, Actor, Actor memorizes, Actual position, Angular speed, Animation, Animation system, Artificial fishes, Artificial life, Automatic derivation, Automaton, Autonomous, Autonomous actor, Autonomous actors, Autonomous agents, Behavior control, Behavioral, Behavioral animation, Behavioral model, Behavioral response, Boulic, Collision, Color coding, Complex environments, Computer animation, Computer graphics, Current position, Current velocity, Curvature cost, Digital actors, Expressive animation, Force field model, Force fields, Game strategy, Geometrical collision detection, Global, Global force field, Global navigation, Graphics, Hearing sensor, High level behavior, Ieee computer society press, Impact point, Interactive, Interactive television, Interactive user, Internal state, Local navigation, Local navigation algorithm, Magnenat, Magnenat thalmann, Modeling, Module, Navigation, Next automaton, Noser, Other objects, Other particles, Parameter space, Particle dynamics, Particle system, Pixel, Production rules, Propagation medium, Sensor, Sensor points, Simple method, Simulation, Sound event, Sound event handler, Sound event table, Sound events, Sound library, Sound sources, Sparse foothold locations, Special functions, Speech recognition, State variables, Synthetic actor, Synthetic sensors, Synthetic vision, Tennis court, Tennis game, Thalmann, Time step, Touch sensors, Trajectory, Turtle position, Unexpected obstacles, View angle, Virtual, Virtual actors, Virtual environment, Virtual environments, Virtual humans, Virtual life, Virtual reality, Virtual sensors, Virtual vision, Virtual world, Virtual worlds, Vision state, Vision system, Vision window, Visual computer, Visual memory, Wind force fields, World modeling.
Abstract
This paper explains methods to provide autonomous virtual humans with the skills necessary to perform a stand-alone role in films, games, and interactive television. We present current research developments in the Virtual Life of autonomous synthetic actors. After a brief description of our geometric, physical, and auditory Virtual Environments, we introduce the perception-action principles with a few simple examples. We emphasize the concept of virtual sensors for virtual humans. In particular, we describe our experiences in implementing virtual sensors such as vision sensors, tactile sensors, and hearing sensors. We then describe knowledge-based navigation, knowledge-based locomotion, and, in more detail, sensor-based tennis.
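The abstract's perception-action principle (an actor senses its environment through virtual sensors and acts on what it perceives) can be illustrated with a minimal sketch. This is not the paper's implementation; the `Actor` class, the view-cone test, and the step function are illustrative assumptions, showing only the general idea of a synthetic vision sensor driving navigation.

```python
import math
from dataclasses import dataclass

@dataclass
class Actor:
    """Hypothetical 2D actor: position plus heading (radians)."""
    x: float
    y: float
    heading: float

def vision_sensor(actor, point, view_angle=math.radians(120), max_range=10.0):
    """Synthetic vision: True if `point` lies inside the actor's view cone."""
    dx, dy = point[0] - actor.x, point[1] - actor.y
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.atan2(dy, dx) - actor.heading
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return abs(bearing) <= view_angle / 2

def step_toward(actor, goal, speed=1.0):
    """One perception-action cycle: act (turn and walk) only on what is perceived."""
    if vision_sensor(actor, goal):
        dx, dy = goal[0] - actor.x, goal[1] - actor.y
        actor.heading = math.atan2(dy, dx)
        step = min(speed, math.hypot(dx, dy))
        actor.x += step * math.cos(actor.heading)
        actor.y += step * math.sin(actor.heading)
    return actor
```

In this toy loop an actor facing a visible goal walks toward it one step per cycle, while a goal behind the actor produces no action at all, mirroring the principle that behavior is driven by sensed, not global, information.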
URL:
DOI: 10.1007/3-540-48317-9_19
Affiliations:
Links to previous steps (curation, corpus...)
- to stream Istex, to step Corpus: 001285
- to stream Istex, to step Curation: 001194
- to stream Istex, to step Checkpoint: 000C11
- to stream Main, to step Merge: 000E24
- to stream Main, to step Curation: 000E23
The document in XML format
<record><TEI wicri:istexFullTextTei="biblStruct:series"><teiHeader><fileDesc><titleStmt><title xml:lang="en">Towards Autonomous, Perceptive, and Intelligent Virtual Actors</title>
<author><name sortKey="Thalmann, Daniel" sort="Thalmann, Daniel" uniqKey="Thalmann D" first="Daniel" last="Thalmann">Daniel Thalmann</name>
</author>
<author><name sortKey="Noser, Hansrudi" sort="Noser, Hansrudi" uniqKey="Noser H" first="Hansrudi" last="Noser">Hansrudi Noser</name>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:B547EB8A97D3A736983442C9C1F8FB4BFA13E9BE</idno>
<date when="1999" year="1999">1999</date>
<idno type="doi">10.1007/3-540-48317-9_19</idno>
<idno type="url">https://api.istex.fr/document/B547EB8A97D3A736983442C9C1F8FB4BFA13E9BE/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">001285</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Corpus" wicri:corpus="ISTEX">001285</idno>
<idno type="wicri:Area/Istex/Curation">001194</idno>
<idno type="wicri:Area/Istex/Checkpoint">000C11</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Checkpoint">000C11</idno>
<idno type="wicri:doubleKey">0302-9743:1999:Thalmann D:towards:autonomous:perceptive</idno>
<idno type="wicri:Area/Main/Merge">000E24</idno>
<idno type="wicri:Area/Main/Curation">000E23</idno>
<idno type="wicri:Area/Main/Exploration">000E23</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title level="a" type="main" xml:lang="en">Towards Autonomous, Perceptive, and Intelligent Virtual Actors</title>
<author><name sortKey="Thalmann, Daniel" sort="Thalmann, Daniel" uniqKey="Thalmann D" first="Daniel" last="Thalmann">Daniel Thalmann</name>
<affiliation wicri:level="3"><country xml:lang="fr">Suisse</country>
<wicri:regionArea>Computer Graphics Lab, EPFL - LIG, Lausanne</wicri:regionArea>
<placeName><settlement type="city">Lausanne</settlement>
<region nuts="3" type="region">Canton de Vaud</region>
</placeName>
</affiliation>
<affiliation wicri:level="1"><country wicri:rule="url">Suisse</country>
</affiliation>
</author>
<author><name sortKey="Noser, Hansrudi" sort="Noser, Hansrudi" uniqKey="Noser H" first="Hansrudi" last="Noser">Hansrudi Noser</name>
<affiliation wicri:level="4"><orgName type="university">Université de Zurich</orgName>
<country>Suisse</country>
<placeName><settlement type="city">Zurich</settlement>
<region nuts="3" type="region">Canton de Zurich</region>
</placeName>
</affiliation>
<affiliation wicri:level="1"><country wicri:rule="url">Suisse</country>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series><title level="s">Lecture Notes in Computer Science</title>
<title level="s" type="sub">Lecture Notes in Artificial Intelligence</title>
<imprint><date>1999</date>
</imprint>
<idno type="ISSN">0302-9743</idno>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="Teeft" xml:lang="en"><term>Acoustic environment</term>
<term>Actor</term>
<term>Actor memorizes</term>
<term>Actual position</term>
<term>Angular speed</term>
<term>Animation</term>
<term>Animation system</term>
<term>Artificial fishes</term>
<term>Artificial life</term>
<term>Automatic derivation</term>
<term>Automaton</term>
<term>Autonomous</term>
<term>Autonomous actor</term>
<term>Autonomous actors</term>
<term>Autonomous agents</term>
<term>Behavior control</term>
<term>Behavioral</term>
<term>Behavioral animation</term>
<term>Behavioral model</term>
<term>Behavioral response</term>
<term>Boulic</term>
<term>Collision</term>
<term>Color coding</term>
<term>Complex environments</term>
<term>Computer animation</term>
<term>Computer graphics</term>
<term>Current position</term>
<term>Current velocity</term>
<term>Curvature cost</term>
<term>Digital actors</term>
<term>Expressive animation</term>
<term>Force field model</term>
<term>Force fields</term>
<term>Game strategy</term>
<term>Geometrical collision detection</term>
<term>Global</term>
<term>Global force field</term>
<term>Global navigation</term>
<term>Graphics</term>
<term>Hearing sensor</term>
<term>High level behavior</term>
<term>Ieee computer society press</term>
<term>Impact point</term>
<term>Interactive</term>
<term>Interactive television</term>
<term>Interactive user</term>
<term>Internal state</term>
<term>Local navigation</term>
<term>Local navigation algorithm</term>
<term>Magnenat</term>
<term>Magnenat thalmann</term>
<term>Modeling</term>
<term>Module</term>
<term>Navigation</term>
<term>Next automaton</term>
<term>Noser</term>
<term>Other objects</term>
<term>Other particles</term>
<term>Parameter space</term>
<term>Particle dynamics</term>
<term>Particle system</term>
<term>Pixel</term>
<term>Production rules</term>
<term>Propagation medium</term>
<term>Sensor</term>
<term>Sensor points</term>
<term>Simple method</term>
<term>Simulation</term>
<term>Sound event</term>
<term>Sound event handler</term>
<term>Sound event table</term>
<term>Sound events</term>
<term>Sound library</term>
<term>Sound sources</term>
<term>Sparse foothold locations</term>
<term>Special functions</term>
<term>Speech recognition</term>
<term>State variables</term>
<term>Synthetic actor</term>
<term>Synthetic sensors</term>
<term>Synthetic vision</term>
<term>Tennis court</term>
<term>Tennis game</term>
<term>Thalmann</term>
<term>Time step</term>
<term>Touch sensors</term>
<term>Trajectory</term>
<term>Turtle position</term>
<term>Unexpected obstacles</term>
<term>View angle</term>
<term>Virtual</term>
<term>Virtual actors</term>
<term>Virtual environment</term>
<term>Virtual environments</term>
<term>Virtual humans</term>
<term>Virtual life</term>
<term>Virtual reality</term>
<term>Virtual sensors</term>
<term>Virtual vision</term>
<term>Virtual world</term>
<term>Virtual worlds</term>
<term>Vision state</term>
<term>Vision system</term>
<term>Vision window</term>
<term>Visual computer</term>
<term>Visual memory</term>
<term>Wind force fields</term>
<term>World modeling</term>
</keywords>
</textClass>
<langUsage><language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">Abstract: This paper explains methods to provide autonomous virtual humans with the skills necessary to perform a stand-alone role in films, games, and interactive television. We present current research developments in the Virtual Life of autonomous synthetic actors. After a brief description of our geometric, physical, and auditory Virtual Environments, we introduce the perception-action principles with a few simple examples. We emphasize the concept of virtual sensors for virtual humans. In particular, we describe our experiences in implementing virtual sensors such as vision sensors, tactile sensors, and hearing sensors. We then describe knowledge-based navigation, knowledge-based locomotion, and, in more detail, sensor-based tennis.</div>
</front>
</TEI>
<affiliations><list><country><li>Suisse</li>
</country>
<region><li>Canton de Vaud</li>
<li>Canton de Zurich</li>
</region>
<settlement><li>Lausanne</li>
<li>Zurich</li>
</settlement>
<orgName><li>Université de Zurich</li>
</orgName>
</list>
<tree><country name="Suisse"><region name="Canton de Vaud"><name sortKey="Thalmann, Daniel" sort="Thalmann, Daniel" uniqKey="Thalmann D" first="Daniel" last="Thalmann">Daniel Thalmann</name>
</region>
<name sortKey="Noser, Hansrudi" sort="Noser, Hansrudi" uniqKey="Noser H" first="Hansrudi" last="Noser">Hansrudi Noser</name>
<name sortKey="Thalmann, Daniel" sort="Thalmann, Daniel" uniqKey="Thalmann D" first="Daniel" last="Thalmann">Daniel Thalmann</name>
</country>
</tree>
</affiliations>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Wicri/Sarre/explor/MusicSarreV3/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000E23 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000E23 | SxmlIndent | more
To put a link to this page in the Wicri network
{{Explor lien |wiki= Wicri/Sarre |area= MusicSarreV3 |flux= Main |étape= Exploration |type= RBID |clé= ISTEX:B547EB8A97D3A736983442C9C1F8FB4BFA13E9BE |texte= Towards Autonomous, Perceptive, and Intelligent Virtual Actors }}
This area was generated with Dilib version V0.6.33.